Deep Predictive Policy Training using Reinforcement Learning
Skilled robot task learning is best implemented by predictive action policies
due to the inherent latency of sensorimotor processes. However, training such
predictive policies is challenging as it involves finding a trajectory of motor
activations for the full duration of the action. We propose a data-efficient
deep predictive policy training (DPPT) framework with a deep neural network
policy architecture which maps an image observation to a sequence of motor
activations. The architecture consists of three sub-networks referred to as the
perception, policy and behavior super-layers. The perception and behavior
super-layers force an abstraction of visual and motor data trained with
synthetic and simulated training samples, respectively. The policy super-layer
is a small sub-network with comparatively few parameters that maps data
between the abstracted manifolds. It is trained for each task using
policy-search reinforcement learning methods. We demonstrate the suitability of the proposed
architecture and learning framework by training predictive policies for skilled
object grasping and ball throwing on a PR2 robot. The effectiveness of the
method is illustrated by the fact that these tasks are trained using only about
180 real robot attempts with qualitative terminal rewards.

Comment: This work is submitted to the IEEE/RSJ International Conference on
Intelligent Robots and Systems 2017 (IROS 2017).
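The three-super-layer structure described above can be sketched as a chain of small networks: perception compresses the image into an abstract state, a small policy maps that state to a latent behavior code, and the behavior layer decodes the code into a full motor trajectory. The sketch below is illustrative only; all dimensions, layer sizes, and the random (untrained) weights are assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Build random-weight MLP layers; placeholders standing in for trained weights."""
    return [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Tanh MLP forward pass with a linear output layer."""
    for W in layers[:-1]:
        x = np.tanh(x @ W)
    return x @ layers[-1]

# Hypothetical dimensions: 64x64 grayscale image -> 5-D abstract state;
# 5-D latent code -> trajectory of 10 timesteps x 7 joint commands.
perception = mlp([64 * 64, 128, 5])   # visual abstraction (trained on synthetic images)
policy     = mlp([5, 16, 5])          # small task-specific mapping (trained by RL)
behavior   = mlp([5, 64, 10 * 7])     # motor abstraction (trained in simulation)

image = rng.random(64 * 64)
state = forward(perception, image)                      # point on the visual manifold
latent = forward(policy, state)                         # mapping between manifolds
trajectory = forward(behavior, latent).reshape(10, 7)   # predictive motor plan
```

Because only the small policy sub-network is trained per task, the RL search space stays low-dimensional, which is consistent with the roughly 180 real-robot attempts the abstract reports.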
Meta Reinforcement Learning for Sim-to-real Domain Adaptation
Modern reinforcement learning methods suffer from low sample efficiency and
unsafe exploration, making it infeasible to train robotic policies entirely on
real hardware. In this work, we propose to address the problem of sim-to-real
domain transfer by using meta learning to train a policy that can adapt to a
variety of dynamic conditions, and using a task-specific trajectory generation
model to provide an action space that facilitates quick exploration. We
evaluate the method by performing domain adaptation in simulation and analyzing
the structure of the latent space during adaptation. We then deploy this policy
on a KUKA LBR 4+ robot and evaluate its performance on a task of hitting a
hockey puck to a target. Our method shows more consistent and stable domain
adaptation than the baseline, resulting in better overall performance.

Comment: Submitted to ICRA 202